Add Prompt dataclass with initial methods (JUD-2082) #562
Conversation
Summary of Changes
Hello @adivate2021, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive system for managing prompts within the application. It establishes a new `Prompt` dataclass for structured prompt handling, extends the API with dedicated endpoints for prompt insertion and retrieval, and provides corresponding client-side methods and data models. This enhancement streamlines the process of defining, storing, and utilizing dynamic prompt templates.
Highlights
- New Prompt Dataclass: Introduced a new `Prompt` dataclass in `src/judgeval/prompts/prompt.py` to encapsulate prompt definition, storage, and compilation logic. This class includes `create` and `get` class methods for interacting with the API, and a `compile` method for variable substitution (see the skeleton sketch after this list).
- API Endpoint Additions: Added new API endpoints `/prompts/insert/` and `/prompts/fetch/` to handle the creation and retrieval of prompts. These endpoints are integrated into the API generation scripts.
- API Client Functionality: Implemented `prompts_insert` and `prompts_fetch` methods within both the synchronous and asynchronous API clients (`JudgmentSyncClient` and `JudgmentAsyncClient`) to facilitate interaction with the new prompt management endpoints.
- Data Model Definitions: Defined new `TypedDict` and Pydantic `BaseModel` classes (`PromptInsertRequest`, `PromptInsertResponse`, `PromptFetchResponse`) to standardize the data structures for prompt-related API requests and responses.
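As a rough orientation for reviewers, a skeleton of such a dataclass might look like the sketch below. The field and method names come from this review thread; the bodies and exact signatures are assumptions, not the PR's actual code.

```python
# Rough skeleton inferred from this review thread; not the PR's actual code.
from dataclasses import dataclass, field
from string import Template
from typing import Dict, List, Optional


@dataclass
class Prompt:
    name: str
    prompt: str
    tags: List[str]
    commit_id: Optional[str] = None
    parent_commit_id: Optional[str] = None
    metadata: Optional[Dict[str, str]] = None
    _template: Template = field(init=False, repr=False)  # built in __post_init__

    @classmethod
    def create(cls, project_name: str, name: str, prompt: str,
               tags: Optional[List[str]] = None) -> "Prompt":
        ...  # POST /prompts/insert/ via the API client

    @classmethod
    def get(cls, project_name: str, name: str, commit_id: Optional[str] = None,
            tag: Optional[str] = None) -> "Prompt":
        ...  # GET /prompts/fetch/ via the API client

    def compile(self, **kwargs: str) -> str:
        ...  # substitute {{variable}} placeholders with concrete values
```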
Code Review
This pull request introduces a `Prompt` dataclass along with methods to create, fetch, and compile prompts, and integrates them into the API. The changes are generally well-structured. However, I've identified a couple of critical issues in `src/judgeval/prompts/prompt.py` related to unsafe dictionary access for optional fields, which could lead to `KeyError` exceptions. Additionally, there are a few medium-severity issues concerning code style, a missing type hint, and a potential regression in error handling in `prompt_scorer.py` that could affect user experience and code consistency. I've provided detailed comments and suggestions for each of these points.
src/judgeval/prompts/prompt.py
Outdated
```python
r = client.prompts_insert(
    payload={"name": name, "prompt": prompt, "tags": tags}
)
return r["commit_id"], r["parent_commit_id"]
```
The `parent_commit_id` key is not guaranteed to be in the response dictionary `r`, as it is marked as `NotRequired` in the `PromptInsertResponse` type definition. Accessing it directly with `r["parent_commit_id"]` will raise a `KeyError` if the key is absent. You should use `r.get("parent_commit_id")` for safe access.
return r["commit_id"], r["parent_commit_id"] | |
return r["commit_id"], r.get("parent_commit_id") |
The key should always be there, but added the safe access anyway.
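As a quick, self-contained illustration of why `NotRequired` keys need `.get` (plain Python using `typing_extensions`, not the project's actual types):

```python
# Self-contained illustration of NotRequired access; not project code.
from typing_extensions import NotRequired, TypedDict

class InsertResponse(TypedDict):
    commit_id: str
    parent_commit_id: NotRequired[str]

r: InsertResponse = {"commit_id": "abc123"}  # the optional key is absent

print(r.get("parent_commit_id"))  # -> None: safe access
try:
    r["parent_commit_id"]  # direct subscripting raises when the key is absent
except KeyError:
    print("KeyError: parent_commit_id")
```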
src/judgeval/prompts/prompt.py
Outdated
```python
prompt=prompt_config["prompt"],
tags=prompt_config["tags"],
commit_id=prompt_config["commit_id"],
parent_commit_id=prompt_config["parent_commit_id"],
```
The `parent_commit_id` key is not guaranteed to be in the `prompt_config` dictionary, as it is marked as `NotRequired` in the `PromptFetchResponse` type definition. Accessing it directly with `prompt_config["parent_commit_id"]` will raise a `KeyError` if the key is absent. You should use `prompt_config.get("parent_commit_id")` for safe access.
```diff
- parent_commit_id=prompt_config["parent_commit_id"],
+ parent_commit_id=prompt_config.get("parent_commit_id"),
```
The key should always be present (it may be `None`), but added the safe access anyway.
src/judgeval/prompts/prompt.py
Outdated
```python
tag: Optional[str] = None,
judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
The function `fetch_prompt` is missing a return type hint. Based on its usage and the `prompts_fetch` method it calls, the return type should be `PromptFetchResponse`. Adding type hints improves code clarity and allows for better static analysis. You will also need to import `PromptFetchResponse` from `judgeval.api.api_types`.
```diff
- ):
+ ) -> "PromptFetchResponse":
```
+1
added
src/judgeval/prompts/prompt.py
Outdated
```python
if not tags:
    tags = []
```
While `if not tags:` works, it's more idiomatic and explicit in Python to check for `None` with `if tags is None:`. This avoids potential confusion if an empty list is passed intentionally and you wanted to treat it differently from `None` (though in this case the outcome is the same).
```diff
- if not tags:
-     tags = []
+ if tags is None:
+     tags = []
```
changed
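For context, a tiny standalone demonstration of the difference between the two checks (illustrative only, not project code):

```python
# Illustrative only: truthiness check vs. an explicit None check.
def normalize_truthy(tags):
    if not tags:        # triggers for None AND for an intentional []
        tags = []
    return tags

def normalize_none(tags):
    if tags is None:    # triggers only when no value was supplied
        tags = []
    return tags

assert normalize_truthy(None) == [] and normalize_none(None) == []
assert normalize_truthy([]) == [] and normalize_none([]) == []
# Behaviorally identical here, but the None check states the intent explicitly.
```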
Greptile Overview
Greptile Summary
This PR introduces a comprehensive prompt management system to the JudgmentLabs platform. The implementation adds a new `Prompt` dataclass with version control capabilities, template compilation, and CRUD operations. The changes span multiple layers of the system: API endpoint definitions in scripts (`api_generator.py`, `openapi_transform.py`), auto-generated type definitions (`api_types.py`, `judgment_types.py`), API client methods (`api/__init__.py`), and the main `Prompt` class implementation (`prompts/prompt.py`).
The core feature is a Git-like versioning system for prompts with commit tracking, tagging capabilities, and project association. The `Prompt` class supports template compilation using double-brace syntax (`{{variable}}`) converted to Python's `Template` format. Users can create, fetch, tag, untag, and list prompt versions through both synchronous and asynchronous API methods. The implementation follows the existing codebase patterns and integrates cleanly with the current API client infrastructure.
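Based on the methods described above, typical usage would look roughly like the following sketch. The method and argument names mirror this review thread, but the exact signatures are assumptions and have not been verified against the SDK:

```python
# Rough usage sketch inferred from this review; not verified against the SDK.
from judgeval.prompts.prompt import Prompt

# Create a new prompt version (returns a Prompt carrying commit metadata).
prompt = Prompt.create(
    project_name="my-project",
    name="greeting",
    prompt="Hello {{name}}, welcome to {{place}}!",
    tags=["v1"],
)

# Fetch by tag (or by commit_id, but not both at once, per Prompt.get).
fetched = Prompt.get(project_name="my-project", name="greeting", tag="v1")

# Compile the {{variable}} template with concrete values.
print(fetched.compile(name="Ada", place="Judgment"))
```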
Changed Files
| Filename | Score | Overview |
|---|---|---|
| scripts/openapi_transform.py | 5/5 | Added 5 new prompt management endpoints to the JUDGEVAL_PATHS list for API generation |
| scripts/api_generator.py | 5/5 | Added 5 new prompt-related endpoints to enable client method generation |
| src/judgeval/api/api_types.py | 5/5 | Auto-generated TypedDict classes for prompt operations with version control support |
| src/judgeval/data/judgment_types.py | 5/5 | Auto-generated Pydantic models for prompt management API endpoints |
| src/judgeval/scorers/judgeval_scorers/api_scorers/prompt_scorer.py | 4/5 | Simplified error handling by removing special HTTP 500 error cases |
| src/judgeval/api/__init__.py | 4/5 | Added five new prompt management methods to sync and async API clients |
| src/judgeval/prompts/prompt.py | 4/5 | New Prompt dataclass with CRUD operations, versioning, and template compilation |
Confidence score: 4/5
- This PR introduces solid prompt management functionality with minimal risk, but has some implementation concerns that should be addressed
- Score reflects well-structured code following existing patterns, but missing type annotations and potential KeyError issues prevent a perfect score
- Pay close attention to src/judgeval/prompts/prompt.py for the missing return type annotation and safe dictionary access patterns
Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant Prompt
    participant JudgmentSyncClient
    participant JudgmentAPI
    User->>Prompt: "create(project_name, name, prompt, tags)"
    Prompt->>JudgmentSyncClient: "prompts_insert(payload)"
    JudgmentSyncClient->>JudgmentAPI: "POST /prompts/insert/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptInsertResponse"
    JudgmentSyncClient-->>Prompt: "commit_id, parent_commit_id"
    Prompt-->>User: "Prompt instance"
    User->>Prompt: "get(project_name, name, commit_id/tag)"
    Prompt->>JudgmentSyncClient: "prompts_fetch(name, project_name, commit_id, tag)"
    JudgmentSyncClient->>JudgmentAPI: "GET /prompts/fetch/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptFetchResponse"
    JudgmentSyncClient-->>Prompt: "prompt_config"
    Prompt-->>User: "Prompt instance"
    User->>Prompt: "compile(**kwargs)"
    Prompt->>Prompt: "Template.substitute(**kwargs)"
    Prompt-->>User: "compiled_prompt_string"
    User->>Prompt: "tag(project_name, name, commit_id, tags)"
    Prompt->>JudgmentSyncClient: "prompts_tag(payload)"
    JudgmentSyncClient->>JudgmentAPI: "POST /prompts/tag/"
    JudgmentAPI-->>JudgmentSyncClient: "PromptTagResponse"
    JudgmentSyncClient-->>Prompt: "commit_id"
    Prompt-->>User: "commit_id"
```
Additional Comments (1)
- src/judgeval/scorers/judgeval_scorers/api_scorers/prompt_scorer.py, lines 88-98 (link). style: Inconsistent error handling: why keep 500 status code special handling here but remove it from `push_prompt_scorer` and `fetch_prompt_scorer`?
7 files reviewed, 4 comments
src/judgeval/prompts/prompt.py
Outdated
```python
tag: Optional[str] = None,
judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
style: Missing return type annotation - should specify return type for consistency with other functions
src/judgeval/prompts/prompt.py
Outdated
```python
_template: Template = field(init=False, repr=False)

def __post_init__(self):
    template_str = re.sub(r"\{\{(\w+)\}\}", r"$\1", self.prompt)
```
logic: The regex pattern only captures word characters (\w+) - variables with hyphens, dots, or other characters won't be matched
changed to more general
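A hedged sketch of what a more general pattern could look like; the PR's final pattern is not shown in this thread, and `DottedTemplate`/`compile_prompt` are illustrative names, not the project's:

```python
# Illustrative sketch only; the PR's final regex isn't shown in this thread.
import re
from string import Template

class DottedTemplate(Template):
    # Allow dots and hyphens inside identifiers, unlike the default \w+ rule.
    idpattern = r"[A-Za-z][\w.\-]*"

def compile_prompt(prompt: str, values: dict) -> str:
    # Match any non-brace content between {{ and }}, then substitute.
    template_str = re.sub(r"\{\{\s*([^{}]+?)\s*\}\}", r"${\1}", prompt)
    return DottedTemplate(template_str).substitute(values)

print(compile_prompt("Hi {{user-name}}!", {"user-name": "Ada"}))  # -> Hi Ada!
```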
"You cannot fetch a prompt by both commit_id and tag at the same time" | ||
) | ||
prompt_config = fetch_prompt(project_name, name, commit_id, tag) | ||
if prompt_config is None: |
logic: This check may not work as expected - API responses typically don't return None, they raise exceptions or return empty objects
`fetch_prompt` returns `None` when called without a commit_id or tag (needed for creating a prompt on the platform website).
```python
client = JudgmentSyncClient(judgment_api_key, organization_id)
try:
    prompt_config = client.prompts_fetch(project_name, name, commit_id, tag)
    return prompt_config["commit"]
```
[CriticalError]
Potential KeyError: `fetch_prompt` returns `prompt_config["commit"]`, but if the API call succeeds and the response lacks the "commit" key, this will raise a `KeyError`. The API type shows `PromptFetchResponse` has `commit` as `NotRequired[Optional[PromptCommitInfo]]`, meaning it might not be present in the response.
Suggested Change

```python
def fetch_prompt(
    project_name: str,
    name: str,
    commit_id: Optional[str] = None,
    tag: Optional[str] = None,
    judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
    organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
    client = JudgmentSyncClient(judgment_api_key, organization_id)
    try:
        prompt_config = client.prompts_fetch(project_name, name, commit_id, tag)
        return prompt_config.get("commit")
    except JudgmentAPIError as e:
        raise JudgmentAPIError(
            status_code=e.status_code,
            detail=f"Failed to fetch prompt '{name}': {e.detail}",
            response=e.response,
        )
```
File: src/judgeval/prompts/prompt.py, line 48
`commit` will always be a key (it may be `None`).
```python
@classmethod
def get(
    cls,
    project_name: str,
    name: str,
    commit_id: Optional[str] = None,
    tag: Optional[str] = None,
):
    if commit_id is not None and tag is not None:
        raise ValueError(
            "You cannot fetch a prompt by both commit_id and tag at the same time"
        )
    prompt_config = fetch_prompt(project_name, name, commit_id, tag)
    if prompt_config is None:
        raise ValueError(f"Prompt '{name}' not found in project '{project_name}'")
    return cls(
        name=prompt_config["name"],
        prompt=prompt_config["prompt"],
        tags=prompt_config["tags"],
        commit_id=prompt_config["commit_id"],
        parent_commit_id=prompt_config["parent_commit_id"],
        metadata={
            "creator_first_name": prompt_config["first_name"],
            "creator_last_name": prompt_config["last_name"],
        },
    )
```
[CriticalError]
Potential KeyError in dictionary access: The `Prompt.get()` method assumes all expected keys exist in `prompt_config` without validation. If the API response is missing required fields like "name", "prompt", "tags", etc., this will raise a `KeyError`.
Suggested Change

```python
@classmethod
def get(
    cls,
    project_name: str,
    name: str,
    commit_id: Optional[str] = None,
    tag: Optional[str] = None,
):
    if commit_id is not None and tag is not None:
        raise ValueError(
            "You cannot fetch a prompt by both commit_id and tag at the same time"
        )
    prompt_config = fetch_prompt(project_name, name, commit_id, tag)
    if prompt_config is None:
        raise ValueError(f"Prompt '{name}' not found in project '{project_name}'")
    # Validate required fields exist
    required_fields = ["name", "prompt", "tags", "commit_id", "first_name", "last_name"]
    for field in required_fields:
        if field not in prompt_config:
            raise ValueError(f"Invalid API response: missing required field '{field}'")
    return cls(
        name=prompt_config["name"],
        prompt=prompt_config["prompt"],
        tags=prompt_config["tags"],
        commit_id=prompt_config["commit_id"],
        parent_commit_id=prompt_config.get("parent_commit_id"),
        metadata={
            "creator_first_name": prompt_config["first_name"],
            "creator_last_name": prompt_config["last_name"],
        },
    )
```
File: src/judgeval/prompts/prompt.py, line 179
should exist
```python
@classmethod
def list(cls, project_name: str, name: str):
    prompt_configs = list_prompt(project_name, name)["versions"]
    return [
        cls(
            name=prompt_config["name"],
            prompt=prompt_config["prompt"],
            tags=prompt_config["tags"],
            commit_id=prompt_config["commit_id"],
            parent_commit_id=prompt_config["parent_commit_id"],
            metadata={
                "creator_first_name": prompt_config["first_name"],
                "creator_last_name": prompt_config["last_name"],
                "created_at": prompt_config["created_at"],
            },
        )
        for prompt_config in prompt_configs
    ]
```
[BestPractice]
Potential KeyError in dictionary access: The `Prompt.list()` method assumes all expected keys exist in each `prompt_config` item without validation. If any API response item is missing required fields, this will raise a `KeyError`.
Suggested Change

```python
@classmethod
def list(cls, project_name: str, name: str):
    prompt_configs = list_prompt(project_name, name)["versions"]
    result = []
    for prompt_config in prompt_configs:
        # Validate required fields exist
        required_fields = ["name", "prompt", "tags", "commit_id", "first_name", "last_name", "created_at"]
        for field in required_fields:
            if field not in prompt_config:
                raise ValueError(f"Invalid API response: missing required field '{field}' in prompt version")
        result.append(cls(
            name=prompt_config["name"],
            prompt=prompt_config["prompt"],
            tags=prompt_config["tags"],
            commit_id=prompt_config["commit_id"],
            parent_commit_id=prompt_config.get("parent_commit_id"),
            metadata={
                "creator_first_name": prompt_config["first_name"],
                "creator_last_name": prompt_config["last_name"],
                "created_at": prompt_config["created_at"],
            },
        ))
    return result
```
File: src/judgeval/prompts/prompt.py, line 208
should exist
lgtm
src/judgeval/prompts/prompt.py
Outdated
```python
tag: Optional[str] = None,
judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
organization_id: str = os.getenv("JUDGMENT_ORG_ID") or "",
):
```
+1
src/e2etests/test_prompts.py
Outdated
```python
assert prompt_list[0].prompt == "version 3", "First prompt should be version 1"
assert prompt_list[1].prompt == "version 2", "Second prompt should be version 2"
assert prompt_list[2].prompt == "version 1", "Third prompt should be version 3"
```
[BestPractice]
The assertion messages here are misleading. The test correctly asserts that the prompts are listed from newest to oldest (v3, v2, v1), but the failure messages state the opposite order. Updating them will improve clarity if this test fails in the future.
```diff
- assert prompt_list[0].prompt == "version 3", "First prompt should be version 1"
- assert prompt_list[1].prompt == "version 2", "Second prompt should be version 2"
- assert prompt_list[2].prompt == "version 1", "Third prompt should be version 3"
+ assert prompt_list[0].prompt == "version 3", "First prompt in list should be the latest (version 3)"
+ assert prompt_list[1].prompt == "version 2", "Second prompt in list should be version 2"
+ assert prompt_list[2].prompt == "version 1", "Third prompt in list should be the oldest (version 1)"
```
File: src/e2etests/test_prompts.py, line 290
src/judgeval/api/__init__.py
Outdated
```python
def prompts_fetch(
    self,
    name: str,
    project_name: Optional[str] = None,
```
Why does this require both? Can we make the server endpoint only require project_id? We can resolve it locally and cache it once.
Also, can we make project_id the only required parameter (non-optional)?
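A hedged sketch of the "resolve locally and cache once" idea. The `projects_resolve` method here is hypothetical, invented for illustration; `JudgmentSyncClient` and the `judgeval.env` constants come from this thread:

```python
# Hypothetical sketch of resolving project_name -> project_id once and caching.
from functools import lru_cache

from judgeval.api import JudgmentSyncClient  # module path per this PR
from judgeval.env import JUDGMENT_API_KEY, JUDGMENT_ORG_ID  # per the nit below

@lru_cache(maxsize=None)
def resolve_project_id(project_name: str) -> str:
    """Resolve a project name to its id once; later calls hit the cache."""
    client = JudgmentSyncClient(JUDGMENT_API_KEY, JUDGMENT_ORG_ID)
    # `projects_resolve` is an invented client method, used for illustration only.
    return client.projects_resolve(project_name)["project_id"]
```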
src/judgeval/api/__init__.py
Outdated
```python
def prompts_get_prompt_versions(
    self,
    name: str,
    project_id: Optional[str] = None,
```
Same here: can we make only project_id mandatory?
fixed
src/judgeval/api/__init__.py
Outdated
```python
name: str,
project_id: Optional[str] = None,
project_name: Optional[str] = None,
get_user_avatars: Optional[str] = None,
```
What is get_user_avatars? That doesn't seem like a flag that should be tied to this function in the backend.
removed
src/judgeval/prompts/prompt.py
Outdated
```python
name: str,
prompt: str,
tags: List[str],
judgment_api_key: str = os.getenv("JUDGMENT_API_KEY") or "",
```
(nit) Can we use the JUDGMENT_API_KEY and JUDGMENT_ORG_ID variables from `env.py` everywhere in this file?
changed
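A minimal sketch of what that refactor might look like, assuming `judgeval.env` exposes those two constants as module-level values already read from the environment (the module layout is an assumption):

```python
# Sketch only: assumes judgeval.env exposes JUDGMENT_API_KEY / JUDGMENT_ORG_ID
# as module-level constants already read from the environment.
from typing import Optional

from judgeval.env import JUDGMENT_API_KEY, JUDGMENT_ORG_ID

def fetch_prompt(
    project_name: str,
    name: str,
    commit_id: Optional[str] = None,
    tag: Optional[str] = None,
    judgment_api_key: str = JUDGMENT_API_KEY,   # instead of os.getenv(...) or ""
    organization_id: str = JUDGMENT_ORG_ID,
):
    ...
```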
✔️ Propel has finished reviewing this change.
lgtm (from @adivate2021 and @abhishekg999)
Backend PR:
https://github.com/JudgmentLabs/judgment/pull/778